-
As the volume of digital data produced every day grows rapidly, storage media must keep up with the growth rate of the data being created. Although emerging storage solutions have been proposed, such as Solid State Drives (SSDs) with quad-level cells (QLC) or penta-level cells (PLC), Shingled Magnetic Recording (SMR), and LTO tape, these technologies still fall short of meeting the demand for preserving the huge amounts of data available. Moreover, current storage solutions have a limited lifespan, often lasting just a few years, so data must be continuously migrated to new storage drives to ensure long-term preservation. There is therefore a need for alternative storage technologies that offer both high storage capacity and long-term durability. In contrast to existing storage devices, synthetic Deoxyribonucleic Acid (DNA) storage emerges as a promising candidate for archival data storage, offering both high-density storage capacity and the potential for long-term data preservation. In this paper, we will introduce DNA storage, discuss its capabilities under current biotechnologies, discuss possible improvements, and explore further improvements enabled by future technologies. Currently, DNA storage is limited by weaknesses such as high error rates and long access latency. We will focus on possible DNA storage research issues based on the relevant bio- and computer technologies, and we will provide potential solutions and forward-looking predictions about the development and future of DNA storage. We will discuss DNA storage from the following five perspectives: 1) We will describe the basic background of DNA storage, including the basic technologies for reading and writing DNA storage, data access processes such as Polymerase Chain Reaction (PCR) based random access, encoding schemes from digital data to DNA, and the required DNA storage format. 2) We will describe the issues of DNA storage under current technologies, including bio-constraints during the encoding process (such as avoiding long homopolymers and maintaining certain GC content), different types of errors in the synthesis and sequencing processes, low practical capacity with current technologies, slow read and write performance, and low encoding density for random access. 3) Based on these issues, we will summarize the current solutions for each and discuss potential solutions based on future technologies. 4) From a system perspective, we will discuss what a DNA storage system would look like if DNA storage becomes commercialized and widely deployed in archival systems, including: i) How can data be efficiently indexed in DNA storage? ii) What does a good hierarchical storage system with DNA storage look like? iii) How will DNA storage evolve as the technology develops? 5) Finally, we will provide a comparison with other competitive technologies.
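To make the encoding constraints mentioned in this abstract concrete, the sketch below shows one minimal way to map a byte stream to DNA bases while capping homopolymer run length. It is an illustrative toy codec written for this summary, not a scheme from the paper; the constants MAX_HOMOPOLYMER and GC_RANGE are assumed values.

```python
# Illustrative toy codec (not a scheme from the paper): map bytes to DNA bases
# one bit per base while never emitting more than MAX_HOMOPOLYMER identical
# consecutive bases. The constants below are assumed values for the example.

NUCLEOTIDES = "ACGT"
MAX_HOMOPOLYMER = 3          # assumed cap on identical consecutive bases
GC_RANGE = (0.4, 0.6)        # assumed acceptable GC fraction (checked, not enforced)

def bits_from_bytes(data: bytes):
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def _candidates(prev, run):
    # Exclude the previous base once its run has reached the cap, so the
    # encoder and decoder derive the same ordered candidate list at each step.
    return [b for b in NUCLEOTIDES if not (b == prev and run >= MAX_HOMOPOLYMER)]

def encode(data: bytes) -> str:
    seq, prev, run = [], None, 0
    for bit in bits_from_bytes(data):
        base = _candidates(prev, run)[bit]   # bit is 0 or 1; list has >= 3 entries
        run = run + 1 if base == prev else 1
        prev = base
        seq.append(base)
    return "".join(seq)

def decode(seq: str) -> bytes:
    bits, prev, run = [], None, 0
    for base in seq:
        bits.append(_candidates(prev, run).index(base))
        run = run + 1 if base == prev else 1
        prev = base
    out = bytearray()
    for i in range(0, len(bits), 8):         # encode() emits 8 bases per byte
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def gc_content(seq: str) -> float:
    return sum(1 for b in seq if b in "GC") / len(seq)

if __name__ == "__main__":
    payload = b"hello DNA storage"
    strand = encode(payload)
    assert decode(strand) == payload
    print(len(strand), "bases, GC fraction:", round(gc_content(strand), 2))
```

A production codec would pack closer to two bits per base, add error-correcting redundancy, and actively rebalance GC content (for example by rotating the base mapping); this sketch only reports the GC fraction so that constraint stays visible.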
-
Data deduplication relies on a chunk index to identify redundancy among incoming chunks. As backup data scales, it is impractical to maintain the entire chunk index in memory, so an index lookup must search a portion of the on-storage index, causing a dramatic drop in index lookup throughput. Existing studies propose searching only a subset of the whole index (a partial index) to limit storage I/Os and guarantee high index lookup throughput. However, several core factors in the design of partial indexes have not been fully explored. In this paper, we first comprehensively investigate the trade-offs of using different meta-groups, sampling methods, and meta-group selection policies for a partial index. We then propose a Collaborative Partial Index (CPI), which takes advantage of two meta-groups, recipe-segments and container-catalogs, to identify unique chunks more efficiently and effectively. CPI further introduces a hook-entry sharing technique and a two-stage eviction policy to reduce memory usage without hurting the deduplication ratio. In our evaluation, under the same constraints on memory usage and storage I/O, CPI achieves a 1.21x-2.17x higher deduplication ratio than state-of-the-art partial indexing schemes. Alternatively, CPI achieves 1.8x-4.98x higher index lookup throughput than the others when the same deduplication ratio is reached. Compared with full indexing, CPI's maximum deduplication ratio is only 4.07% lower, while its throughput is 37.1x-122.2x that of full indexing, depending on the storage I/O constraints in our evaluation cases.
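As a rough illustration of the general partial-indexing idea this abstract builds on (not an implementation of CPI itself), the sketch below keeps only sampled "hook" fingerprints in memory and, on a hook hit, prefetches that hook's meta-group (here a container catalog) into a small cache before checking for duplicates. The sampling rate, cache size, and class names are assumptions made for the example.

```python
# Minimal sketch of partial indexing for deduplication: only sampled hook
# fingerprints stay in memory; a hook hit triggers one (simulated) storage I/O
# that loads the whole container catalog the hook points to.
import hashlib
from collections import OrderedDict

SAMPLE_RATE = 64          # assumed: keep 1 of every ~64 fingerprints as a hook
CACHE_CAPACITY = 4        # assumed: number of prefetched catalogs kept in memory

def fingerprint(chunk: bytes) -> bytes:
    return hashlib.sha1(chunk).digest()

def is_hook(fp: bytes) -> bool:
    # Content-defined sampling: a fingerprint is a hook if its low bits are zero.
    return int.from_bytes(fp[:8], "big") % SAMPLE_RATE == 0

class PartialIndex:
    def __init__(self):
        self.hooks = {}             # hook fingerprint -> container id (in memory)
        self.containers = {}        # container id -> set of fingerprints (on storage)
        self.cache = OrderedDict()  # LRU set of prefetched container ids
        self.cached_fps = {}        # fingerprint -> container id, for cached containers

    def _prefetch(self, cid: int):
        """Simulates the storage I/O that loads one container catalog."""
        if cid in self.cache:
            self.cache.move_to_end(cid)
            return
        if len(self.cache) >= CACHE_CAPACITY:
            old, _ = self.cache.popitem(last=False)
            for fp in self.containers[old]:
                self.cached_fps.pop(fp, None)
        self.cache[cid] = True
        for fp in self.containers[cid]:
            self.cached_fps[fp] = cid

    def lookup(self, fp: bytes):
        """Return the container holding fp, or None if fp looks unique."""
        cid = self.cached_fps.get(fp)            # locality hit in the cache
        if cid is None and fp in self.hooks:     # otherwise consult the hooks
            self._prefetch(self.hooks[fp])
            cid = self.cached_fps.get(fp)
        return cid

    def insert(self, fp: bytes, cid: int):
        self.containers.setdefault(cid, set()).add(fp)
        if is_hook(fp):
            self.hooks[fp] = cid
```

A duplicate whose fingerprint was never sampled is still found whenever a hook from the same meta-group has already triggered a prefetch, which is how partial indexing trades a small loss in deduplication ratio for bounded memory and storage I/O.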
-
Deoxyribonucleic Acid (DNA), with its ultra-high storage density and long durability, is a promising long-term archival storage medium and is attracting much attention today. A DNA storage system encodes digital data into synthetic DNA sequences for storage and decodes DNA sequences back to digital data via sequencing. Many encoding schemes have been proposed to enlarge DNA storage capacity by increasing DNA encoding density. However, increasing encoding density alone is insufficient because enhancing DNA storage capacity is a multifaceted problem. This paper assumes that random access is necessary for practical DNA archival storage. We identify all factors affecting DNA storage capacity under current technologies and systematically investigate the practical DNA storage capacity achievable with several popular encoding schemes. The investigation shows that collisions between primers and DNA payload sequences are a major factor limiting DNA storage capacity. Based on this discovery, we design a new encoding scheme called Collision Aware Code (CAC), which trades some encoding density for a reduction in primer-payload collisions. Compared with the best result among the five existing encoding schemes, CAC can extricate 120% more primers from collisions and increase the DNA tube capacity from 211.96 GB to 295.11 GB. We also evaluate CAC's recoverability from DNA storage errors; the results show it is comparable to that of existing encoding schemes.
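The sketch below illustrates what a primer-payload "collision" means in this setting: a primer, or its reverse complement, nearly matching a window of a payload strand, which could let PCR-based random access amplify unintended sequences. This is an assumed approximate-match check written for illustration, not the CAC encoder or the paper's actual collision criterion; the mismatch tolerance is a made-up parameter.

```python
# Illustrative primer-payload collision check (assumed criterion, not CAC):
# a primer collides with a payload if the primer or its reverse complement
# appears in the payload within a small Hamming distance.

COMPLEMENT = str.maketrans("ACGT", "TGCA")
MAX_MISMATCHES = 2    # assumed tolerance for a near-match

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def collides(primer: str, payload: str, max_mismatches: int = MAX_MISMATCHES) -> bool:
    """True if the primer (or its reverse complement) nearly matches any window of payload."""
    k = len(primer)
    targets = (primer, reverse_complement(primer))
    for i in range(len(payload) - k + 1):
        window = payload[i:i + k]
        if any(hamming(t, window) <= max_mismatches for t in targets):
            return True
    return False

def usable_primers(primers, payloads):
    """Filter a primer library down to primers free of collisions with all payloads."""
    return [p for p in primers if not any(collides(p, s) for s in payloads)]
```

Under a check like this, every primer lost to a collision shrinks the pool of addressable objects in a tube, which is why an encoder that shapes payloads to avoid near-matches can raise practical capacity even at a lower encoding density.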